Pulmonary nodule detection based on feature pyramid networks
GAO Zhiyong, HUANG Jinzhen, DU Chenggang
Journal of Computer Applications    2020, 40 (9): 2571-2576.   DOI: 10.11772/j.issn.1001-9081.2019122122
Pulmonary nodules in Computed Tomography (CT) images vary greatly in size and are often small and irregular, which leads to low detection sensitivity. To solve this problem, a method based on Feature Pyramid Network (FPN) was proposed. First, FPN was used to extract multi-scale features of nodules and to strengthen the features of small objects and of object boundary details. Second, a semantic segmentation network (named Mask FPN) was designed based on the FPN to segment and extract the pulmonary parenchyma quickly and accurately, so that the pulmonary parenchyma area could serve as a location map for object proposals. At the same time, a deconvolution layer was added on the top layer of FPN and a multi-scale prediction strategy was used to optimize the Faster Region-based Convolutional Neural Network (Faster R-CNN) in order to improve the performance of pulmonary nodule detection. Finally, to address the imbalance of positive and negative samples in the pulmonary nodule dataset, the focal loss function was used in the Region Proposal Network (RPN) module to increase the detection rate of nodules. The proposed algorithm was tested on the public dataset LUNA16. Experimental results show that the network improved with FPN and the deconvolution layer, as well as the focal loss function, is helpful to pulmonary nodule detection. Combining these improvements, the sensitivity of the proposed method reached 95.7% at an average of 46.7 candidate nodules per scan, which is higher than that of other convolutional networks such as Faster R-CNN and U-Net. The proposed method can effectively extract nodule features of different scales and improve the detection sensitivity of pulmonary nodules in CT images. Meanwhile, it can also detect small nodules effectively, which is beneficial to the diagnosis and treatment of lung cancer.
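The focal loss used in the RPN module can be sketched as follows. This is a minimal NumPy illustration of the binary focal loss, not the paper's implementation; the defaults γ=2 and α=0.25 are commonly used values assumed here, not taken from the abstract.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma,
    down-weighting easy examples so training focuses on hard ones."""
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return -at * (1 - pt) ** gamma * np.log(pt)

# A confidently correct positive contributes far less loss than a hard one,
# which is what counters the overwhelming number of easy negatives in an RPN.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

With γ=0 and α=0.5 the expression reduces to half the ordinary cross-entropy, which is one way to sanity-check an implementation.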
Palm vein image recognition based on side chain connected convolution neural network
LOU Mengying, WANG Tianjing, LIU Yaqin, YANG Feng, HUANG Jing
Journal of Computer Applications    2020, 40 (12): 3673-3678.   DOI: 10.11772/j.issn.1001-9081.2020050667
To overcome the performance degradation of palm vein recognition systems caused by the small quantity and uneven quality of palm vein images, a palm vein image recognition method based on a side chain connected convolutional neural network was proposed. Firstly, palm vein features were extracted by the convolutional and pooling layers of a ResNet-based model. Secondly, the Exponential Linear Unit (ELU) activation function, Batch Normalization (BN) and Dropout were used to improve and optimize the model, so as to alleviate gradient vanishing, prevent overfitting, speed up convergence and enhance the generalization ability of the model. Finally, a Densely Connected Network (DenseNet) was introduced to make the extracted palm vein features richer and more effective. Experimental results on two public databases and one self-built database show that the recognition rates of the proposed method on the three databases are 99.98%, 97.95% and 97.96% respectively, indicating that the proposed method can effectively improve the performance of palm vein recognition systems and is suitable for practical applications of palm vein recognition.
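Of the three techniques combined above, ELU is the simplest to state precisely; a small NumPy sketch for illustration (α=1 assumed):

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU: identity for x > 0, smooth saturation alpha*(exp(x) - 1) for
    x <= 0. Negative outputs push mean activations toward zero, which
    helps alleviate the vanishing-gradient problem mentioned above."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

pos = elu(np.array([2.0]))    # positives pass through unchanged
neg = elu(np.array([-50.0]))  # saturates at -alpha for large negative inputs
```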
Security-risk-oriented distributed resource allocation method in power wireless private network
HUANG Xiuli, HUANG Jin, YU Pengfei, MIAO Weiwei, YANG Ruxia, LI Yijing, YU Peng
Journal of Computer Applications    2020, 40 (12): 3586-3593.   DOI: 10.11772/j.issn.1001-9081.2020040488
To ensure terminal communication under strong interference and high failure risk in the power wireless private network, a security-risk-oriented, energy-efficient distributed resource allocation method was proposed. Firstly, the energy consumption composition of the base stations was analyzed, and a resource allocation model maximizing system energy efficiency was established. Secondly, the K-means++ algorithm was adopted to cluster the base stations, dividing the whole network into several independent areas so that high-risk base stations could be handled separately within each cluster. Thirdly, within each cluster, the high-risk base stations were switched to sleep mode according to their risk values, and the users they served were transferred to other base stations in the same cluster. Finally, the transmission powers of the normal base stations in each cluster were optimized. Theoretical analysis and simulation results show that clustering the base stations greatly reduces the complexity of base station sleeping and of power optimization and allocation, and that the overall network energy efficiency increases from 0.158 9 Mb/J to 0.195 4 Mb/J after the high-risk base stations are turned off. The proposed distributed resource allocation method can effectively improve the energy efficiency of the system.
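The k-means++ seeding step that keeps the chosen cluster centres well spread can be sketched as below; the 2-D coordinates and cluster count are hypothetical, and a real deployment would cluster on base-station positions or risk features.

```python
import numpy as np

def kmeans_pp_seeds(points, k, rng):
    """k-means++ seeding: after a random first centre, each new centre is
    drawn with probability proportional to its squared distance from the
    nearest centre chosen so far, spreading centres across the network."""
    centres = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([((points - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centres)

rng = np.random.default_rng(0)
# Hypothetical 2-D base-station coordinates forming two separated areas
stations = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
seeds = kmeans_pp_seeds(stations, 2, rng)
```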
Model compression method of convolution neural network based on feature-reuse
JI Shuwei, YANG Xiwang, HUANG Jinying, YIN Ning
Journal of Computer Applications    2019, 39 (6): 1607-1613.   DOI: 10.11772/j.issn.1001-9081.2018091992
In order to reduce the volume and computational complexity of convolutional neural network models without reducing accuracy, a model compression method based on a Feature-Reuse unit (FR-unit) was proposed. Firstly, different optimization methods were proposed for different types of convolutional neural network structures. Then, after the input feature map was convolved, the input features were combined with the output features. Finally, the combined features were transferred to the next layer. Through the reuse of low-level features, the total number of extracted features does not change, ensuring that the accuracy of the optimized network does not change either. Experimental results on the CIFAR10 dataset show that, after optimization, the volume of the Visual Geometry Group (VGG) model is reduced to 75.4% and its prediction time to 43.5% of the original, and the volume of the ResNet model is reduced to 53.1% and its prediction time to 60.9% of the original, without reducing the accuracy on the test set.
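The core of the FR-unit as described is concatenating a layer's input with its output so low-level features are reused downstream; a shape-level NumPy sketch (the "convolution" here is a stand-in lambda, not a real conv layer):

```python
import numpy as np

def fr_unit(x, conv):
    """Feature-reuse unit (sketch): the layer input is concatenated with
    the convolved output along the channel axis, so the next layer sees
    both low-level and newly extracted features without recomputing them."""
    return np.concatenate([x, conv(x)], axis=1)  # axis 1 = channels (NCHW)

x = np.ones((1, 8, 4, 4))                    # batch, channels, height, width
out = fr_unit(x, lambda t: 0.5 * t[:, :4])   # stand-in op emitting 4 channels
```

Because the input channels are carried through unchanged, the convolution itself can produce fewer channels for the same total feature count, which is where the volume reduction comes from.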
Hybrid precoding scheme based on improved particle swarm optimization algorithm in mmWave massive MIMO system
LI Renmin, HUANG Jinsong, CHEN Chen, WU Junqin
Journal of Computer Applications    2018, 38 (8): 2365-2369.   DOI: 10.11772/j.issn.1001-9081.2017123026
To address the problems that hybrid precoding based on the traditional Particle Swarm Optimization (PSO) algorithm in millimeter Wave (mmWave) massive Multi-Input Multi-Output (MIMO) systems converges slowly and easily falls into a local optimum in later iterations, a hybrid precoding scheme based on an improved PSO algorithm was proposed. Firstly, the particles' position and velocity vectors were initialized randomly, and the initial swarm-best position vector was obtained by maximizing the system sum rate. Secondly, the position and velocity vectors were updated; two updated individual-historical-best position vectors were randomly selected and their weighted sum taken as the new individual-historical-best position vector, and the particles that maximized the system sum rate were then picked out. The weighted average of the individual-historical-best position vectors of these particles was taken as the new swarm-best position vector and compared with the previous one. After many iterations, the final swarm-best position vector was obtained, which is the desired hybrid precoding vector. Simulation results show that, compared with the hybrid precoding scheme based on the traditional PSO algorithm, the proposed scheme is better in both convergence speed and sum rate: its convergence speed is improved by 100%, and its performance reaches 90% of that of the fully digital precoding scheme. Therefore, the proposed scheme can effectively improve system performance and accelerate convergence.
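A minimal PSO maximizer over a toy surrogate objective is sketched below; the paper's variant additionally mixes two randomly chosen personal-best vectors and averages the best particles, which is omitted here, and all parameter values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_maximize(f, dim=4, n=20, iters=60, w=0.7, c1=1.4, c2=1.4):
    """Plain particle swarm optimization (maximization). Each particle is
    pulled toward both its personal best and the swarm-best position."""
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()

# Toy stand-in for the sum-rate objective: maximum value 0 at z = 0.5
g, best = pso_maximize(lambda z: -np.sum((z - 0.5) ** 2))
```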
Incremental attribute reduction method for incomplete hybrid data with variable precision
WANG Yinglong, ZENG Qi, QIAN Wenbin, SHU Wenhao, HUANG Jintao
Journal of Computer Applications    2018, 38 (10): 2764-2771.   DOI: 10.11772/j.issn.1001-9081.2018041293
To deal with the high computational complexity of static attribute reduction when data increases dynamically in an incomplete hybrid decision system, an incremental attribute reduction method with variable precision was proposed for incomplete hybrid data. The importance degrees of attributes were measured by conditional entropy in the variable precision model. Then, the incremental updating of conditional entropy and the updating mechanism of attribute reduction under dynamically increasing data were analyzed and designed in detail, and an incremental attribute reduction method was constructed with a heuristic greedy strategy, which can dynamically update the attribute reduction of incomplete numeric and symbolic hybrid data. Experimental comparison and analysis on five real hybrid datasets from UCI show the following. In terms of reduction effect, when the incremental size of Echocardiogram, Hepatitis, Autos, Credit and Dermatology increased to 90%+10%, the number of attributes was reduced from 12, 19, 25, 17 and 34 to 6, 7, 10, 11 and 13 respectively, accounting for 50.0%, 36.8%, 40.0%, 64.7% and 38.2% of the original attribute sets. In terms of execution time, the average times consumed by the incremental algorithm on the five datasets were 2.99, 3.13, 9.70, 274.19 and 50.87 seconds, while those of the static algorithm were 284.92, 302.76, 1062.23, 3510.79 and 667.85 seconds; the time consumption of the incremental algorithm is related to the instance size, the number of attributes and the attribute value types of the dataset. The experimental results show that the incremental attribute reduction algorithm is significantly faster than the static algorithm and can effectively eliminate redundant attributes.
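The conditional-entropy measure used to rank attributes can be illustrated on a toy decision table. This is the classical form; the paper's variable precision version additionally tolerates a controlled level of misclassification.

```python
import math
from collections import Counter

def conditional_entropy(attr_vals, decisions):
    """H(D|A): weighted decision entropy within each block of objects that
    share the same value of attribute A. Zero means A fully determines D,
    so lower conditional entropy indicates a more important attribute."""
    total, h = len(attr_vals), 0.0
    for a in set(attr_vals):
        block = [d for v, d in zip(attr_vals, decisions) if v == a]
        p_block = len(block) / total
        for cnt in Counter(block).values():
            p = cnt / len(block)
            h -= p_block * p * math.log2(p)
    return h

h_determining = conditional_entropy(['x', 'x', 'y', 'y'], [0, 0, 1, 1])  # A fixes D
h_useless = conditional_entropy(['x', 'x', 'x', 'x'], [0, 0, 1, 1])      # A says nothing
```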
Traffic sign recognition based on optimized convolutional neural network architecture
WANG Xiaobin, HUANG Jinjie, LIU Wenju
Journal of Computer Applications    2017, 37 (2): 530-534.   DOI: 10.11772/j.issn.1001-9081.2017.02.0530
Among existing algorithms for traffic sign recognition, some train quickly but have a low recognition rate, while others achieve a high recognition rate but need a long training time. To resolve these problems, the Convolutional Neural Network (CNN) architecture was optimized by using the Batch Normalization (BN) method, the Greedy Layer-Wise Pretraining (GLP) method, and a Support Vector Machine (SVM) in place of the classifier, and a new traffic sign recognition algorithm based on this optimized CNN architecture was proposed. The BN method was used to change the data distribution of the middle layers: the output of each convolutional layer was normalized to zero mean and unit variance, which accelerates training convergence and reduces training time. With the GLP method, the first convolutional layer was trained and its parameters preserved when training was over; then the second layer was trained in the same way, and so on until all convolutional layers were trained. The GLP method effectively improves the recognition rate of the convolutional network. The SVM classifier focuses only on misclassified samples and no longer processes the correct ones, thus speeding up training. Experiments on the German Traffic Sign Recognition Benchmark show that, compared with the traditional CNN, the new algorithm reduces training time by 20.67% and reaches a recognition rate of 98.24%. The experimental results prove that, by optimizing the structure of the traditional CNN, the new algorithm greatly shortens training time while reaching a high recognition rate.
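The normalization that BN applies to a layer's output can be sketched as follows (per-feature statistics over the batch; the learned scale and shift parameters γ and β are omitted from this sketch):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature column over the batch to zero mean and unit
    variance; eps guards against division by zero for constant features."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# A batch drawn with mean 5 and std 3 comes out standardized
out = batch_norm(np.random.default_rng(0).normal(5.0, 3.0, size=(64, 10)))
```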
Context and role based access control for cloud computing
HUANG Jingjing, FANG Qun
Journal of Computer Applications    2015, 35 (2): 393-396.   DOI: 10.11772/j.issn.1001-9081.2015.02.0393
The open and dynamic characteristics of the cloud computing environment easily cause security problems, so the security of data resources and user privacy face severe challenges. According to the dynamic characteristics of users and data resources in cloud computing, a context and role based access control model was proposed. The model takes the context information and context constraints of the cloud computing environment into account, and evaluates the user's access request against the authorization policy on the server, so that user permissions can be granted dynamically. The process by which cloud users access resources was given, and analysis and comparison further illustrate that the model has advantages in access control. The scheme not only reduces the complexity of management but also limits the privileges of cloud service providers, so it can effectively ensure the safety of cloud resources.
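The decision rule — role permission and context constraint both satisfied — can be sketched as below; the roles, actions and context attributes are hypothetical, not taken from the paper.

```python
from datetime import time

# Hypothetical policy: role -> permitted actions
ROLE_PERMS = {"analyst": {"read"}, "admin": {"read", "write"}}

def check_access(role, action, ctx):
    """Grant only if the role holds the permission AND the request context
    (here: a trusted network during office hours) meets the constraint,
    so the same role can be granted or denied dynamically."""
    if action not in ROLE_PERMS.get(role, set()):
        return False
    in_hours = time(8, 0) <= ctx["time"] <= time(18, 0)
    return ctx["network"] == "trusted" and in_hours

granted = check_access("analyst", "read", {"network": "trusted", "time": time(10, 0)})
denied = check_access("analyst", "read", {"network": "public", "time": time(10, 0)})
```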
Multiplicative watermarking algorithm based on wavelet visual model
HUANG Er-song, LIU Jin-hua, WEN Ru-hong
Journal of Computer Applications    2011, 31 (08): 2165-2168.   DOI: 10.3724/SP.J.1087.2011.02165
The additive watermarking algorithm has good imperceptibility, but the robustness of the watermark is poor. Therefore, a multiplicative image watermarking method combined with a visual model in the wavelet domain was proposed. In the embedding scheme, the middle-frequency subbands were used as the watermark embedding space to achieve a tradeoff between the imperceptibility and the robustness of the watermarking system, and the embedding strength factor was determined by considering the frequency masking, luminance masking and texture masking of the host image. In the detection scheme, the probability density function of the wavelet coefficients was modeled by the Generalized Gaussian Distribution (GGD), the watermark decision threshold was obtained using the Neyman-Pearson (NP) criterion, and the Receiver Operating Characteristic (ROC) curve relating the probability of false alarm to the probability of detection was derived. Finally, the robustness of the proposed watermarking was tested against common image processing attacks such as JPEG compression, Additive White Gaussian Noise (AWGN), scaling and cropping. The experimental results demonstrate that the proposed method has good detection performance and robustness.
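The multiplicative embedding rule itself is compact; a NumPy sketch with an assumed fixed strength α = 0.1 and a bipolar watermark (the paper instead derives the strength per coefficient from the masking model):

```python
import numpy as np

def embed_multiplicative(coeffs, watermark, alpha):
    """Multiplicative rule y = c * (1 + alpha * w): watermark energy scales
    with coefficient magnitude, so strong (textured, less visible) regions
    carry more watermark -- the source of the robustness advantage."""
    return coeffs * (1.0 + alpha * watermark)

rng = np.random.default_rng(0)
c = rng.normal(0.0, 10.0, 1000)    # stand-in for mid-band wavelet coefficients
w = rng.choice([-1.0, 1.0], 1000)  # bipolar watermark sequence
y = embed_multiplicative(c, w, alpha=0.1)
```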
Cross layer link adaptation scheme in wireless local area network
HUANG Jing-Lian
Journal of Computer Applications   
To overcome the defect that the IEEE 802.11 Wireless Local Area Network (WLAN) standard does not provide a link adaptation scheme, a joint Medium Access Control (MAC) layer and physical (PHY) layer Cross Layer Link Adaptation (CLLA) scheme was proposed. The CLLA scheme takes channel interference into consideration and distinguishes collision loss from interference loss; it gives mathematical descriptions of the PHY layer and the MAC layer, derives the relationship between the two layers in terms of throughput, and selects the transmission rate adaptively to improve throughput. Simulation results and comparisons with existing schemes show that the proposed CLLA scheme not only adapts to interference and channel variation, but also improves system throughput obviously.
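The adaptive rate-selection step amounts to maximizing expected goodput; an illustrative sketch with hypothetical rates and frame-error probabilities (real values would come from the scheme's PHY/MAC model, not these made-up numbers):

```python
# Candidate PHY rates (Mb/s) and assumed frame-error rates at the current
# channel state; pick the rate that maximizes goodput = rate * (1 - FER).
rates = [6.0, 12.0, 24.0, 54.0]
fer = [0.01, 0.05, 0.20, 0.90]

best_rate = max(zip(rates, fer), key=lambda rf: rf[0] * (1.0 - rf[1]))[0]
# The fastest nominal rate loses here: at 90% frame error its goodput collapses.
```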
Energy efficiency analysis of DCF-based MAC protocol in Ad Hoc network
HUANG Jing-Lian
Journal of Computer Applications   
IEEE 802.11 Medium Access Control (MAC) protocol provides a competition-based distributed channel access mechanism for mobile stations to share the wireless medium in wireless Ad Hoc networks. Under the precondition of variable packet length, an energy analytical method was proposed, which calculated the energy efficiency of the basic, the RTS/CTS (Request To Send/Clear To Send) and the hybrid access mechanisms of the IEEE 802.11 Distributed Coordination Function (DCF). Furthermore, the effects of the network size, the average packet length, maximum backoff time, and the initial contention window on the energy efficiency of IEEE 802.11 DCF were explored in detail. Correctness and effectiveness of the proposed analytical method were verified through detailed simulation results.
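The quantity being analyzed — energy efficiency — is delivered payload bits per joule; an illustrative sketch with made-up energy figures (the paper derives these terms analytically from the DCF model rather than assuming them):

```python
def energy_efficiency(payload_bits, e_tx, e_rx, e_idle, e_overhead):
    """Delivered payload bits per joule spent on transmission, reception,
    idle/backoff listening, and protocol overhead (RTS/CTS/ACK, headers)."""
    return payload_bits / (e_tx + e_rx + e_idle + e_overhead)

# Longer packets amortize the fixed overhead energy (illustrative numbers)
short = energy_efficiency(500 * 8, 1.0e-3, 0.5e-3, 0.2e-3, 0.4e-3)
long_ = energy_efficiency(1500 * 8, 2.5e-3, 0.5e-3, 0.2e-3, 0.4e-3)
```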
Web QoS based on content switching
ZHU Li-cai,HUANG Jin-jin
Journal of Computer Applications    2005, 25 (12): 2966-2967.  
The principle of content switching was introduced, and the way to guarantee end-to-end Web QoS through content switching was analyzed. An example was given to show how to realize Web QoS on a host.
Detection method of program buffer overflow based on Petri Net
HUANG Jin-zhi, HU Jian-sheng, LIAO Yun, CHAI Ren-wen
Journal of Computer Applications    2005, 25 (05): 1219-1221.   DOI: 10.3724/SP.J.1087.2005.1219
Most security problems of software result from buffer overflows. Therefore, in order to reduce security bugs in software, a detection method for program code buffer overflow based on colored Petri Nets was put forward. Its correctness and simplicity were verified by simulation using CPN Tools, adding a new method for buffer overflow detection in software.